Vistaar: Diverse Benchmarks and Training Sets for Indian Language ASR
Improving ASR systems is necessary to make new LLM-based use-cases accessible
to people across the globe. In this paper, we focus on Indian languages, and
make the case that diverse benchmarks are required to evaluate and improve ASR
systems for Indian languages. To address this, we collate Vistaar as a set of
59 benchmarks across various language and domain combinations, on which we
evaluate 3 publicly available ASR systems and 2 commercial systems. We also
train IndicWhisper models by fine-tuning the Whisper models on publicly
available training datasets across 12 Indian languages, totalling 10.7K
hours. We show that IndicWhisper significantly improves over the considered ASR
systems on the Vistaar benchmark. Indeed, IndicWhisper has the lowest WER in 39
out of the 59 benchmarks, with an average reduction of 4.1 WER. We open-source
all datasets, code and models.
Comment: Accepted at INTERSPEECH 2023
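As a rough illustration of how a fine-tuned Whisper checkpoint such as IndicWhisper could be used for transcription, here is a minimal Python sketch built on the Hugging Face transformers ASR pipeline. The model id and audio file name are placeholders (the actual IndicWhisper checkpoints are distributed with the paper's released code), and this is not the authors' own inference script.

# Minimal sketch: transcribing an Indian-language recording with a
# Whisper-style checkpoint via the transformers ASR pipeline.
import torch
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="openai/whisper-medium",   # placeholder; swap in an IndicWhisper checkpoint
    device=0 if torch.cuda.is_available() else -1,
)

# Whisper operates on 30-second windows; chunk_length_s lets the pipeline
# split and stitch longer recordings automatically.
result = asr("sample_hindi.wav", chunk_length_s=30)
print(result["text"])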
Towards Building ASR Systems for the Next Billion Users
Recent methods in speech and language technology pretrain very large models
which are fine-tuned for specific tasks. However, the benefits of such large
models are often limited to a few resource rich languages of the world. In this
work, we make multiple contributions towards building ASR systems for low
resource languages from the Indian subcontinent. First, we curate 17,000 hours
of raw speech data for 40 Indian languages from a wide variety of domains
including education, news, technology, and finance. Second, using this raw
speech data we pretrain several variants of wav2vec style models for 40 Indian
languages. Third, we analyze the pretrained models to find key features:
codebook vectors of similar sounding phonemes are shared across languages,
representations across layers are discriminative of the language family, and
attention heads often pay attention within small local windows. Fourth, we
fine-tune this model for downstream ASR for 9 languages and obtain
state-of-the-art results on 3 public datasets, including on very low-resource
languages such as Sinhala and Nepali. Our work establishes that multilingual
pretraining is an effective strategy for building ASR systems for the
linguistically diverse speakers of the Indian subcontinent. Our code, data and
models are available publicly at https://indicnlp.ai4bharat.org/indicwav2vec/
and we hope they will help advance research in ASR for Indic languages.
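For context on how a fine-tuned wav2vec-style model is typically applied to downstream ASR, here is a minimal greedy CTC decoding sketch in Python using Hugging Face transformers. The model id and audio file name are illustrative placeholders rather than the released IndicWav2Vec checkpoints, which are linked from the project page above.

# Minimal sketch: greedy CTC decoding with a wav2vec 2.0-style checkpoint
# fine-tuned for ASR.
import torch
import soundfile as sf
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "facebook/wav2vec2-base-960h"  # placeholder; swap in an IndicWav2Vec ASR checkpoint
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id).eval()

speech, sr = sf.read("sample.wav")        # expects 16 kHz mono audio
inputs = processor(speech, sampling_rate=sr, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding: take the argmax per frame, then collapse repeats and blanks.
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])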